Gradient-free method for nonsmooth distributed optimization

Authors
Abstract


Similar articles

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is the sum of a differentiable (possibly nonconvex) component and a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
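
As a rough illustration rather than the paper's exact algorithm, the sketch below shows the variance-reduced proximal step that methods of this kind are built on, for min_x (1/n) sum_i f_i(x) + g(x). All names here (grad_f, prox_g, eta, m, b) are caller-supplied placeholders, not notation from the paper.

    import numpy as np

    # One epoch of a ProxSVRG-style inner loop for
    #   min_x (1/n) sum_i f_i(x) + g(x),
    # with each f_i smooth (possibly nonconvex) and g convex but
    # possibly non-differentiable.
    def prox_svrg_epoch(x_tilde, n, grad_f, prox_g, eta, m, b, rng):
        # Anchor gradient computed once at the snapshot point x_tilde.
        mu = np.mean([grad_f(i, x_tilde) for i in range(n)], axis=0)
        x = x_tilde.copy()
        for _ in range(m):
            idx = rng.integers(0, n, size=b)
            # Variance-reduced estimator: minibatch difference plus anchor.
            v = np.mean([grad_f(i, x) - grad_f(i, x_tilde) for i in idx], axis=0) + mu
            # The proximal step absorbs the nonsmooth convex component g.
            x = prox_g(x - eta * v, eta)
        return x

For g(x) = lam * ||x||_1, for instance, prox_g(z, eta) is soft-thresholding: np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0).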


A Derivative-free Method for Linearly Constrained Nonsmooth Optimization

This paper develops a new derivative-free method for solving linearly constrained nonsmooth optimization problems. The objective functions in these problems are, in general, non-regular locally Lipschitz continuous functions. Computing generalized subgradients of such functions is a difficult task. In this paper we suggest an algorithm for computing subgradients of a broad class ...
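
The paper's construction of generalized subgradients is more elaborate, but the derivative-free principle it builds on can be illustrated with a plain central-difference estimator that uses only function values; f, x, and h below are generic placeholders, not the paper's notation.

    import numpy as np

    # Central finite-difference estimate of a gradient-like vector,
    # using only function evaluations (no analytic derivatives).
    # At kinks of a nonsmooth function the result depends on h; the
    # paper's algorithm treats such non-regular points explicitly.
    def fd_gradient(f, x, h=1e-6):
        g = np.zeros_like(x, dtype=float)
        for j in range(x.size):
            e = np.zeros_like(x, dtype=float)
            e[j] = h
            g[j] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g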


An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that a...
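
DFAL itself works through an augmented Lagrangian with dual updates, but its node-local subproblems reduce to proximal gradient steps of the following form. grad_f_block, prox_xi, and L are caller-supplied stand-ins, not the paper's interface.

    # One round of block proximal gradient steps for
    #   min_x f(x) + sum_i xi_i(x_i),
    # where f is smooth with blockwise Lipschitz constants L[i] and each
    # xi_i is a node's private convex, possibly nonsmooth, cost.
    def block_prox_gradient_round(x_blocks, grad_f_block, prox_xi, L):
        new_blocks = []
        for i, x_i in enumerate(x_blocks):
            g_i = grad_f_block(i, x_blocks)  # partial gradient for block i
            new_blocks.append(prox_xi(i, x_i - g_i / L[i], 1.0 / L[i]))
        return new_blocks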


A Push-Pull Gradient Method for Distributed Optimization in Networks

In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents’ cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method where each node maintains two estima...
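
A minimal synchronous simulation of a push-pull-type iteration is sketched below. R and C stand for row- and column-stochastic mixing matrices consistent with the network, grad(i, x) for agent i's local gradient, and alpha for the step size; these names are illustrative assumptions, not the paper's code.

    import numpy as np

    # Each agent keeps a decision estimate (a row of X) and a gradient
    # tracker (a row of Y). R (row-stochastic) "pulls" decision
    # information from neighbors; C (column-stochastic) "pushes"
    # gradient-tracking information.
    def push_pull(X, grad, R, C, alpha, iters):
        n = X.shape[0]
        G = np.stack([grad(i, X[i]) for i in range(n)])
        Y = G.copy()  # trackers initialized to local gradients
        for _ in range(iters):
            X = R @ (X - alpha * Y)  # consensus plus gradient step
            G_new = np.stack([grad(i, X[i]) for i in range(n)])
            Y = C @ Y + G_new - G  # gradient-tracking update
            G = G_new
        return X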


An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

Since $x^*_i = \bar{x}_i$ when $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$, it follows that $\bar{x}^*_i = \bar{x}_i$ if and only if $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$. Hence, $h_i(\bar{x}^*_i) = 0$. Case 2: Suppose that $i \in I^c := \mathcal{N} \setminus I$, i.e., $\|\nabla_{x_i} f(\bar{x})\|_2 > \lambda B_i$. In this case, $\bar{x}^*_i \neq \bar{x}_i$. From the first-order optimality condition, we have $\nabla_{x_i} f(\bar{x}) + L_i (\bar{x}^*_i - \bar{x}_i) + \lambda B_i \frac{\bar{x}^*_i - \bar{x}_i}{\|\bar{x}^*_i - \bar{x}_i\|_2} = 0$. Let $s_i := \frac{\bar{x}^*_i - \bar{x}_i}{\|\bar{x}^*_i - \bar{x}_i\|_2}$ and $t_i := \|\bar{x}^*_i - \bar{x}_i\|_2$; then $s_i = \frac{-\nabla_{x_i} f(\bar{x})}{L_i t_i + \lambda B_i}$. Since $\|s_i\|_2 = 1$, ...
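
The snippet breaks off mid-step; the continuation it points to follows directly from the displayed identities. Taking norms in $s_i = \frac{-\nabla_{x_i} f(\bar{x})}{L_i t_i + \lambda B_i}$ with $\|s_i\|_2 = 1$ gives

\[
L_i t_i + \lambda B_i = \|\nabla_{x_i} f(\bar{x})\|_2
\quad\Longrightarrow\quad
t_i = \frac{\|\nabla_{x_i} f(\bar{x})\|_2 - \lambda B_i}{L_i} > 0,
\]

so that $\bar{x}^*_i = \bar{x}_i + t_i s_i = \bar{x}_i - \frac{\|\nabla_{x_i} f(\bar{x})\|_2 - \lambda B_i}{L_i \,\|\nabla_{x_i} f(\bar{x})\|_2}\, \nabla_{x_i} f(\bar{x})$, a block soft-thresholding step.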



Journal

Journal title: Journal of Global Optimization

Year: 2014

ISSN: 0925-5001, 1573-2916

DOI: 10.1007/s10898-014-0174-2